Laplace distribution

Laplace
Probability density function
Cumulative distribution function
Parameters \mu — location (real)
b > 0 — scale (real)
Support x \in (-\infty, +\infty)
PDF \frac{1}{2b} \exp\left(-\frac{|x-\mu|}{b}\right)
CDF see text
Mean \mu
Median \mu
Mode \mu
Variance 2b^2
Skewness 0
Ex. kurtosis 3
Entropy \log(2eb)
MGF \frac{\exp(\mu t)}{1 - b^2 t^2} for |t| < 1/b
CF \frac{\exp(\mu i t)}{1 + b^2 t^2}

In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together back-to-back, but the term double exponential distribution is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution.

Characterization

Probability density function

A random variable has a Laplace(μ, b) distribution if its probability density function is

f(x\mid\mu,b) = \frac{1}{2b} \exp\left(-\frac{|x-\mu|}{b}\right)
    = \frac{1}{2b}
    \begin{cases}
      \exp\left(-\frac{\mu-x}{b}\right) & \text{if } x < \mu
      \\[8pt]
      \exp\left(-\frac{x-\mu}{b}\right) & \text{if } x \geq \mu
    \end{cases}

Here, μ is a location parameter and b > 0 is a scale parameter. If μ = 0 and b = 1, the positive half-line is exactly an exponential distribution scaled by 1/2.

The probability density function of the Laplace distribution is also reminiscent of the normal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the mean μ, the Laplace density is expressed in terms of the absolute difference from the mean. Consequently the Laplace distribution has fatter tails than the normal distribution.
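To make the tail comparison concrete, here is a small numeric sketch using only the Python standard library (the function names `laplace_pdf` and `normal_pdf` are ours, not from any package). Choosing b = 1/√2 gives the Laplace distribution variance 2b² = 1, matching the standard normal, so the densities are directly comparable:

```python
import math

def laplace_pdf(x, mu=0.0, b=1.0):
    # f(x | mu, b) = exp(-|x - mu| / b) / (2b)
    return math.exp(-abs(x - mu) / b) / (2.0 * b)

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Normal density, for tail comparison
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Laplace(0, b) has variance 2 b^2, so b = 1/sqrt(2) matches the standard normal's variance.
b = 1.0 / math.sqrt(2.0)
for x in (0.0, 2.0, 4.0):
    print(x, laplace_pdf(x, b=b), normal_pdf(x))
# Four scale units out, the Laplace density is substantially larger: fatter tails.
```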

Cumulative distribution function

Because of the absolute value in its density, the Laplace distribution is easy to integrate if one distinguishes the two symmetric cases x < μ and x ≥ μ. Its cumulative distribution function is as follows:

F(x) = \int_{-\infty}^x f(u)\,\mathrm{d}u
    = \begin{cases}
        \frac{1}{2} \exp\left(\frac{x-\mu}{b}\right) & \text{if } x < \mu
        \\[8pt]
        1 - \frac{1}{2} \exp\left(-\frac{x-\mu}{b}\right) & \text{if } x \geq \mu
      \end{cases}
    = \frac{1}{2}\left[1 + \sgn(x-\mu)\left(1 - \exp(-|x-\mu|/b)\right)\right].

The inverse cumulative distribution function is given by

F^{-1}(p) = \mu - b\,\sgn(p-0.5)\,\ln(1 - 2|p-0.5|).

Generating random variables according to the Laplace distribution

Given a random variable U drawn from the uniform distribution in the interval (-1/2, 1/2], the random variable

X=\mu - b\,\sgn(U)\,\ln(1 - 2|U|)

has a Laplace distribution with parameters μ and b. This follows from the inverse cumulative distribution function given above.
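The inverse-CDF recipe translates directly into code. Here is a sketch using only the Python standard library (`sample_laplace` is our own name):

```python
import math
import random

def sample_laplace(mu=0.0, b=1.0, rng=random):
    # Draw U from (-1/2, 1/2) and apply X = mu - b * sgn(U) * ln(1 - 2|U|)
    u = rng.random() - 0.5          # in [-0.5, 0.5)
    while abs(u) >= 0.5:            # guard against log(0) at the endpoint
        u = rng.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return mu - b * sign * math.log(1.0 - 2.0 * abs(u))

random.seed(0)
draws = [sample_laplace(mu=1.0, b=2.0) for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(mean, var)  # should be near mu = 1 and 2 b^2 = 8
```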

A Laplace(0, b) variate can also be generated as the difference of two i.i.d. Exponential(1/b) random variables. Equivalently, a Laplace(0, 1) random variable can be generated as the logarithm of the ratio of two iid uniform random variables.
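Both constructions are easy to check by simulation. In this sketch we read "Exponential(1/b)" as rate 1/b (mean b), and verify the variances, which should come out near 2b² and 2 respectively:

```python
import math
import random

random.seed(1)
n = 100_000
b = 2.0

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Difference of two i.i.d. exponentials with rate 1/b (mean b) -> Laplace(0, b)
diff = [random.expovariate(1.0 / b) - random.expovariate(1.0 / b) for _ in range(n)]

# Log of the ratio of two i.i.d. Uniform(0, 1] variables -> Laplace(0, 1)
logratio = [math.log((1.0 - random.random()) / (1.0 - random.random())) for _ in range(n)]

print(var(diff))      # should be near 2 b^2 = 8
print(var(logratio))  # should be near 2
```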

Parameter estimation

Given N independent and identically distributed samples x1, x2, ..., xN, the maximum likelihood estimator \hat{\mu} of \mu is the sample median,[1] and the maximum likelihood estimator of b is

\hat{b} = \frac{1}{N} \sum_{i = 1}^{N} |x_i - \hat{\mu}|

(revealing a link between the Laplace distribution and least absolute deviations).
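Both estimators are one-liners. Here is a sketch (`laplace_mle` is our name; the test data are generated via the exponential-difference representation described later in this article):

```python
import random
import statistics

def laplace_mle(xs):
    # MLE of mu: sample median; MLE of b: mean absolute deviation about that median
    mu_hat = statistics.median(xs)
    b_hat = sum(abs(x - mu_hat) for x in xs) / len(xs)
    return mu_hat, b_hat

random.seed(2)
mu, b = 3.0, 1.5
# Laplace(mu, b) draws via the difference of two exponentials with mean b
xs = [mu + random.expovariate(1.0 / b) - random.expovariate(1.0 / b) for _ in range(50_000)]
mu_hat, b_hat = laplace_mle(xs)
print(mu_hat, b_hat)  # should be near 3.0 and 1.5
```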

Moments

The rth raw moment is given by

\mu_r' = \frac{1}{2} \sum_{k=0}^r \left[\frac{r!}{k!\,(r-k)!}\, b^k\, \mu^{r-k}\, k!\, \{1 + (-1)^k\}\right]
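The formula can be sanity-checked for low orders: for r = 1 it collapses to μ, and for r = 2 to μ² + 2b², consistent with the variance 2b². A sketch (the function name is ours):

```python
import math

def laplace_raw_moment(r, mu, b):
    # mu_r' = (1/2) * sum_{k=0}^{r} [ r!/(k! (r-k)!) * b^k * mu^(r-k) * k! * (1 + (-1)^k) ]
    total = 0.0
    for k in range(r + 1):
        binom = math.factorial(r) // (math.factorial(k) * math.factorial(r - k))
        total += binom * (b ** k) * (mu ** (r - k)) * math.factorial(k) * (1 + (-1) ** k)
    return total / 2.0

mu, b = 0.5, 2.0
print(laplace_raw_moment(1, mu, b))  # mu = 0.5
print(laplace_raw_moment(2, mu, b))  # mu^2 + 2 b^2 = 8.25
```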

Related distributions

Relation to the exponential distribution

A Laplace random variable can be represented as the difference of two iid exponential random variables.[2] One way to show this is via characteristic functions: the characteristic function of a sum of independent random variables, which uniquely determines its distribution, is the product of their individual characteristic functions.

Consider two i.i.d. random variables X_1, X_2 \sim \mathrm{Exponential}(\lambda). The characteristic functions of X_1 and -X_2 are \frac{\lambda}{-it + \lambda} and \frac{\lambda}{it + \lambda}, respectively. Multiplying these characteristic functions (equivalent to taking the characteristic function of the sum X_1 + (-X_2)) gives \frac{\lambda^2}{(-it + \lambda)(it + \lambda)} = \frac{\lambda^2}{t^2 + \lambda^2}.

This is the same as the characteristic function of Y \sim \mathrm{Laplace}(0, 1/\lambda), which is \frac{1}{1 + \frac{t^2}{\lambda^2}}.
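The algebra is easy to verify numerically with Python's built-in complex arithmetic (a sketch; the function names are ours):

```python
def cf_exp_diff(t, lam):
    # Product of the characteristic functions of X1 and -X2, with X1, X2 ~ Exponential(lam)
    return (lam / (-1j * t + lam)) * (lam / (1j * t + lam))

def cf_laplace(t, lam):
    # Characteristic function of Laplace(0, 1/lam)
    return 1.0 / (1.0 + t ** 2 / lam ** 2)

lam = 2.0
for t in (-3.0, -0.5, 0.0, 1.0, 2.5):
    print(t, cf_exp_diff(t, lam), cf_laplace(t, lam))  # the two columns agree for every t
```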

Sargan distributions

Sargan distributions are a system of distributions of which the Laplace distribution is a core member. A p'th order Sargan distribution has density[3][4]

f_p(x) = \frac{1}{2}\left(1 + \sum_{j=1}^p j!\,\beta_j\right)^{-1} \exp(-\alpha|x|) \left(1 + \sum_{j=1}^p \beta_j\, \alpha^j |x|^j\right),

for parameters α > 0, β_j ≥ 0. The Laplace distribution results for p = 0.

Applications

The Laplace distribution has been used in speech recognition to model priors on DFT coefficients.

References

  1. ^ Robert M. Norton (May 1984). "The Double Exponential Distribution: Using Calculus to Find a Maximum Likelihood Estimator". The American Statistician (American Statistical Association) 38 (2): 135–136. doi:10.2307/2683252. JSTOR 2683252.
  2. ^ Kotz, Samuel; Kozubowski, Tomasz J.; Podgórski, Krzysztof. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering and Finance. p. 23 (Proposition 2.2.2, Equation 2.2.8).
  3. ^ Everitt, B.S. (2002) The Cambridge Dictionary of Statistics, CUP. ISBN 0-521-81099-X
  4. ^ Johnson, N.L., Kotz, S., Balakrishnan, N. (1994) Continuous Univariate Distributions, Wiley. ISBN 0-471-58495-9. p. 60